AI Act Compliance Dashboard 2.0

Educational Tool for EU AI Act Understanding

AI Act Implementation Timeline - Current Status

Key milestones and current obligations under the EU AI Act

Active Now

GPAI Obligations (Aug 2, 2025): General Purpose AI model providers must comply with transparency, copyright, and safety requirements. Code of Practice provides compliance pathway.

Next Deadline

High-Risk AI Systems (Aug 2, 2026): Deadline for full compliance with risk management, conformity assessment, and documentation requirements.

Already Active

Prohibited AI Practices (Feb 2, 2025): Bans on social scoring, real-time remote biometric identification in publicly accessible spaces (subject to narrow law-enforcement exceptions), and exploitative AI systems are in effect.
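The milestone dates above lend themselves to a simple countdown check. A minimal Python sketch (the milestone names and helper function are illustrative, not part of any official tooling):

```python
from datetime import date
from typing import Optional

# Key AI Act milestone dates from the implementation timeline.
AI_ACT_MILESTONES = {
    "prohibited_practices": date(2025, 2, 2),
    "gpai_obligations": date(2025, 8, 2),
    "high_risk_systems": date(2026, 8, 2),
    "legacy_gpai_models": date(2027, 8, 2),
}

def days_remaining(milestone: str, today: Optional[date] = None) -> int:
    """Days until the milestone; negative if it has already passed."""
    today = today or date.today()
    return (AI_ACT_MILESTONES[milestone] - today).days
```

Note that live counts like the ones shown in this dashboard go stale quickly; computing them against the current date keeps the status display accurate.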

AI Act Risk-Based Approach

Understanding the EU's four-tier risk classification system for AI regulation

Unacceptable Risk

Prohibited
AI systems that pose unacceptable risks to fundamental rights and safety are banned outright under the AI Act.
Examples include:
  • Social scoring by public authorities
  • Real-time remote biometric identification in publicly accessible spaces
  • AI systems exploiting vulnerabilities
  • Subliminal techniques causing harm

High Risk

Strict Requirements
AI systems that significantly impact health, safety, or fundamental rights require comprehensive compliance measures.
Key obligations:
  • Risk management systems
  • Technical documentation
  • Conformity assessment
  • Human oversight
  • Post-market monitoring

Limited Risk

Transparency
AI systems with limited risk must ensure users are clearly informed they're interacting with AI.
Examples include:
  • AI chatbots and virtual assistants
  • AI-generated or manipulated content (deepfakes), which must be labeled
  • Emotion recognition systems
  • Biometric categorization systems

Minimal Risk

No Obligations
Most AI systems fall into this category with no specific legal obligations under the AI Act.
Examples include:
  • Spam filters
  • Video game AI
  • Basic recommendation systems
  • Simple automation tools
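For triage purposes, the four tiers above can be captured as a small lookup. This is an illustrative sketch only (the tier names and example mapping are ours), not a substitute for the Act's actual classification rules in Annex III:

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "strict compliance requirements"
    LIMITED = "transparency obligations"
    MINIMAL = "no specific obligations"

# Illustrative mapping of example use cases to tiers (not legal advice;
# real classification depends on the Act's definitions and Annex III).
EXAMPLE_TIERS = {
    "social scoring by public authorities": RiskTier.UNACCEPTABLE,
    "recruitment and promotion decisions": RiskTier.HIGH,
    "customer-facing chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

def tier_for(use_case: str) -> RiskTier:
    """Look up a known example; unknown use cases default to MINIMAL
    here, but would need a real legal assessment in practice."""
    return EXAMPLE_TIERS.get(use_case, RiskTier.MINIMAL)
```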
AI Act Compliance Assessment

Take this quick check to get a preliminary sense of whether and how the AI Act may apply to your AI system or use case


Does your system meet the AI Act's definition of an AI system?

AI System Definition: A machine-based system designed to operate with varying levels of autonomy that may exhibit adaptiveness after deployment, and that infers how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.
Yes, it's an AI system
My system uses machine learning, neural networks, or other AI techniques to make autonomous decisions or generate outputs
No, it's not an AI system
My system uses only traditional software logic, rule-based automation, or simple statistical methods
I'm not sure
I need help determining if my system qualifies as an AI system under the Act
Question 1 of 5

General Purpose AI Model Requirements

Obligations active since August 2, 2025 • Code of Practice published July 2025

Technical Documentation

Comprehensive documentation including model architecture, training process, data sources, and computational resources.

Transparency Requirements

Information for downstream providers and public summary of training data using EU standardized template.

Copyright Policy

Policies to respect intellectual property rights and comply with copyright law in training data usage.

Systemic Risk GPAI Models (>10²⁵ FLOPs)

Additional requirements for the most advanced general purpose AI models with systemic risk capabilities

Model Evaluation

Systematic assessment of model capabilities, limitations, and potential systemic risks through internal and external evaluations.

Adversarial Testing

Red-teaming and stress testing to identify vulnerabilities and potential misuse scenarios.

Cybersecurity Measures

Robust security protocols to protect model integrity and prevent unauthorized access or manipulation.

Incident Reporting

Obligation to report serious incidents and malfunctions to the EU AI Office and national authorities.
Code of Practice Compliance Pathway

The GPAI Code of Practice, published July 10, 2025, provides a voluntary but highly recommended pathway for demonstrating compliance. Signatories benefit from:

  • Presumption of conformity with AI Act obligations
  • Reduced administrative burden in compliance demonstration
  • One-year grace period for full implementation (until August 2026)
  • Collaborative approach with the EU AI Office during initial implementation
High-Risk AI System Categories (Annex III)

AI systems that significantly impact health, safety, or fundamental rights requiring comprehensive compliance

Biometric Systems

'Real-time' and 'post' remote biometric identification systems, except for specific law enforcement exemptions.

Critical Infrastructure

AI systems for managing and operating critical digital infrastructure, including traffic management.

Education & Training

AI systems for determining access or outcomes in educational institutions and vocational training.

Employment

AI systems for recruitment, promotion decisions, work assignment, and performance evaluation.

Essential Services

AI systems for creditworthiness assessment, insurance pricing, and access to essential services.

Law Enforcement

AI systems for individual risk assessment, lie detection, and evidence evaluation in legal contexts.
High-Risk AI System Requirements

Comprehensive obligations coming into force August 2, 2026 for all high-risk AI systems

Risk Management System (Article 9)
A continuous, iterative process throughout the AI system's lifecycle to identify, estimate, and evaluate risks.
Key components: risk identification, assessment, mitigation measures, regular monitoring.

Data Governance (Article 10)
Ensuring training, validation, and testing data sets are relevant, representative, and free from errors.
Key components: data quality management, bias mitigation, gap assessment.

Technical Documentation (Article 11, Annex IV)
Comprehensive documentation of the AI system's development, functionality, and performance.
Key components: system description, algorithms, training methodologies, risk measures.

Human Oversight (Article 14)
Ensuring humans can understand, monitor, and intervene in AI system operations.
Key components: understanding capabilities, anomaly detection, intervention capabilities.

Conformity Assessment (Articles 43-46)
Procedures to demonstrate compliance with AI Act requirements before market placement.
Key components: internal control, third-party assessment, CE marking, EU database registration.
Key AI Act Definitions

Essential terms and concepts from the EU AI Act with official references

AI System

A machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.
Source: Article 3(1) AI Act

General Purpose AI Model

An AI model, including where such an AI model is trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications, except AI models that are used for research, development or prototyping activities before they are placed on the market.
Source: Article 3(63) AI Act

Provider

A natural or legal person, public authority, agency or other body that develops an AI system or a general-purpose AI model or that has an AI system or a general-purpose AI model developed and places it on the market or puts the AI system into service under its own name or trademark, whether for payment or free of charge.
Source: Article 3(3) AI Act

Deployer

A natural or legal person, public authority, agency or other body using an AI system under its authority except where the AI system is used in the course of a personal non-professional activity.
Source: Article 3(4) AI Act

Systemic Risk

A risk that is specific to the high-impact capabilities of general-purpose AI models, having a significant impact on the Union market due to their reach, or due to actual or reasonably foreseeable negative effects on public health, safety, public security, fundamental rights, or the society as a whole, that can be propagated at scale across the value chain.
Source: Article 3(65) AI Act

Floating Point Operations (FLOPs)

A measure of computational work. Models whose cumulative training compute exceeds 10²⁵ FLOPs are presumed to have high-impact capabilities posing systemic risk and face additional obligations under the AI Act.
Threshold for Systemic Risk Classification (Article 51(2))
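The compute threshold translates directly into a one-line check. A sketch (the constant and function names are illustrative; Article 51(2) frames the presumption in terms of cumulative training compute):

```python
# Article 51(2): models whose cumulative training compute is greater than
# 10^25 floating point operations are presumed to pose systemic risk.
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def presumed_systemic_risk(cumulative_training_flops: float) -> bool:
    """True if the model crosses the systemic-risk presumption threshold.

    The presumption is rebuttable; crossing the threshold triggers
    notification duties, not an automatic final classification.
    """
    return cumulative_training_flops > SYSTEMIC_RISK_THRESHOLD_FLOPS
```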
AI Act Implementation Timeline

Key dates and milestones for EU AI Act compliance with current status updates

August 1, 2024
Completed AI Act Entry into Force

The EU AI Act officially entered into force, beginning the phased implementation timeline for various AI system categories.

February 2, 2025
Active Prohibited AI Practices

Bans on unacceptable risk AI systems came into effect, including social scoring, real-time biometric identification in public spaces, and exploitative AI systems.

July 10, 2025
Published GPAI Code of Practice

The General Purpose AI Code of Practice was published, providing voluntary compliance guidance for GPAI model providers with presumption of conformity benefits.

August 2, 2025
In Effect GPAI Model Obligations

Obligations for general purpose AI model providers became applicable, including transparency, copyright, and safety requirements for systemic risk models.

August 2, 2026
Upcoming High-Risk AI Systems

Full obligations for high-risk AI systems will apply, including risk management, conformity assessment, technical documentation, and human oversight requirements.

⚠️ Note: Digital Omnibus proposal (Nov 2025) may extend this deadline by 24+ months. Legislative process ongoing.
August 2, 2027
Future Legacy GPAI Models

GPAI models placed on the market before August 2, 2025 must achieve full compliance with all applicable requirements.
